

Search for: All records

Creators/Authors contains: "Bence, James R."


  1. Abstract In ecological meta‐analyses, nonindependence among observed effect sizes from the same source paper is common. If not accounted for, nonindependence can seriously undermine inferences. We compared the performance of four meta‐analysis methods that attempt to address such nonindependence and the standard random‐effects model that ignores nonindependence. We simulated data with various types of within‐paper nonindependence, and assessed the standard deviation of the estimated mean effect size and the Type I error rate of each method. Although all four methods performed substantially better than the standard random‐effects model that assumes independence, there were differences in performance among the methods. A two‐step method that first summarizes the multiple observed effect sizes per paper using a weighted mean and then analyzes the reduced data in a standard random‐effects model, and a robust variance estimation method, performed consistently well. A hierarchical model with both random paper and study effects gave precise estimates but had higher Type I error rates, possibly reflecting limitations of currently available meta‐analysis software. Overall, we advocate the use of the two‐step method with a weighted paper mean and the robust variance estimation method as reliable ways to handle within‐paper nonindependence in ecological meta‐analyses.
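The two‐step method described in the abstract above can be sketched as follows. This is an illustrative implementation only, not the paper's actual software: it assumes inverse‐variance weights for the per‐paper mean and a DerSimonian–Laird random‐effects fit for step two, and all function and variable names are hypothetical.

```python
import numpy as np

def two_step_meta(effects, variances, paper_ids):
    """Two-step meta-analysis: collapse each paper's effect sizes to an
    inverse-variance weighted mean, then fit a random-effects model
    (DerSimonian-Laird) to the reduced per-paper data.
    Illustrative sketch; names and weighting choices are assumptions."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    paper_ids = np.asarray(paper_ids)

    # Step 1: weighted mean effect size (and its variance) per source paper
    paper_means, paper_vars = [], []
    for pid in np.unique(paper_ids):
        mask = paper_ids == pid
        w = 1.0 / variances[mask]
        paper_means.append(np.sum(w * effects[mask]) / np.sum(w))
        paper_vars.append(1.0 / np.sum(w))  # simplification for illustration
    y = np.array(paper_means)
    v = np.array(paper_vars)

    # Step 2: standard random-effects model on the reduced data
    w_fixed = 1.0 / v
    mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (y - mu_fixed) ** 2)
    k = len(y)
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - (k - 1)) / c)  # DerSimonian-Laird estimate
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2
```

Because step one leaves only one (aggregated) effect size per paper, the within‐paper nonindependence never enters the random‐effects fit in step two.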
  2. Abstract Despite the wide application of meta‐analysis in ecology, some of the traditional methods used for meta‐analysis may not perform well given the type of data characteristic of ecological meta‐analyses. We reviewed published meta‐analyses on the ecological impacts of global climate change, evaluating the number of replicates used in the primary studies (n_i) and the number of studies or records (k) that were aggregated to calculate a mean effect size. We used the results of the review in a simulation experiment to assess the performance of conventional frequentist and Bayesian meta‐analysis methods for estimating a mean effect size and its uncertainty interval. Our literature review showed that n_i and k were highly variable, distributions were right‐skewed and were generally small (median n_i = 5, median k = 44). Our simulations show that the choice of method for calculating uncertainty intervals was critical for obtaining appropriate coverage (close to the nominal value of 0.95). When k was low (<40), 95% coverage was achieved by a confidence interval (CI) based on the t distribution that uses an adjusted standard error (the Hartung–Knapp–Sidik–Jonkman, HKSJ, method), or by a Bayesian credible interval, whereas bootstrap or z-distribution CIs had lower coverage. Despite the importance of the method used to calculate the uncertainty interval, 39% of the meta‐analyses reviewed did not report the method used, and of the 61% that did, 94% used a potentially problematic method, which may be a consequence of software defaults. In general, for a simple random‐effects meta‐analysis, the performance of the best frequentist and Bayesian methods was similar for the same combinations of factors (k and mean replication), though the Bayesian approach had higher than nominal (>95%) coverage for the mean effect when k was very low (k < 15). Our literature review suggests that many meta‐analyses that used z-distribution or bootstrapping CIs may have overestimated the statistical significance of their results when the number of studies was low; more appropriate methods need to be adopted in ecological meta‐analyses.
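The HKSJ‐adjusted confidence interval recommended in the abstract above can be sketched as follows. This is an illustrative implementation, not the authors' code: it uses a DerSimonian–Laird estimate of the between‐study variance (the paper may use another estimator), and the function name and inputs are assumptions.

```python
import numpy as np
from scipy import stats

def hksj_interval(y, v, alpha=0.05):
    """Mean effect and its Hartung-Knapp-Sidik-Jonkman (HKSJ) CI:
    a t(k-1) quantile combined with a weighted-residual variance
    estimate in place of the usual z-based standard error.
    Illustrative sketch; y = effect sizes, v = sampling variances."""
    y = np.asarray(y, float)
    v = np.asarray(v, float)
    k = len(y)

    # DerSimonian-Laird between-study variance
    w = 1.0 / v
    mu_f = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_f) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects mean
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)

    # HKSJ adjusted standard error and t-based interval
    se2 = np.sum(w_re * (y - mu) ** 2) / ((k - 1) * np.sum(w_re))
    se = np.sqrt(se2)
    t = stats.t.ppf(1 - alpha / 2, df=k - 1)
    return mu, (mu - t * se, mu + t * se)
```

When k is small, the t(k-1) quantile and the adjusted standard error both widen the interval relative to a z-based CI, which is why the HKSJ interval achieves closer to nominal 95% coverage in the simulations described above.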